Time series prediction using LSTM

  • The dataset represents the US macroeconomy from 1965 to 2015 (51 years). There are six features: inflation, wage, unemployment, consumption, investment, and interest rate.
  • The variables are stated to be uncorrelated; we check this in the EDA below.
  • (1) Select one feature as the target to predict.
  • (2) Build a deep learning model (RNN or CNN) that uses the other 5 features to predict the target value for the next time step.
  • (3) (Optional) Predict a few time steps ahead.

  • Summarize which model was chosen, the training and test procedure, the evaluation criteria, and the results.

In [2]:
# import libraries

import warnings
warnings.filterwarnings("ignore")

from math import sqrt

import numpy as np
from numpy import concatenate
import pandas as pd
from pandas import concat, DataFrame
import seaborn as sns
import matplotlib.pyplot as plt

pd.set_option('display.max_columns', None)
plt.style.use('ggplot')
%matplotlib inline

from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error

from keras.models import Sequential, load_model
from keras.layers import Dense, LSTM, Dropout

EDA

  • First we do basic EDA to see how the individual macroeconomic variables change over time.
In [3]:
# read the data
df = pd.read_excel('/Users/akshay/Downloads/ADP_challange/ADP_lead_datascientist/USMacroData.xls')
df.head()
Out[3]:
Month Inflation Wage Unemployment Consumption Investment InterestRate
0 1965-01-01 1.557632 3.200000 4.9 6.972061 12.3 3.90
1 1965-02-01 1.557632 3.600000 5.1 7.811330 13.2 3.98
2 1965-03-01 1.242236 4.000000 4.7 7.828032 18.7 4.04
3 1965-04-01 1.552795 3.585657 4.8 8.477938 9.8 4.09
4 1965-05-01 1.552795 3.968254 4.6 7.139364 10.2 4.10
In [4]:
df.shape
Out[4]:
(612, 7)
In [5]:
# check null values 
df.isnull().sum()
Out[5]:
Month           0
Inflation       0
Wage            0
Unemployment    0
Consumption     0
Investment      0
InterestRate    0
dtype: int64
In [6]:
# distribution plots of the columns
col_names = df.columns

plt.rc("font", size=13)
plt.rcParams["figure.figsize"] = [30, 25]
for i, col in enumerate(col_names[1:]):
    plt.subplot(7, 3, i + 1)
    sns.kdeplot(df[col], shade=True, color="b")
  • The distributions show that the economic variables are not normally distributed.
In [7]:
# summary statistics 
df.describe()
Out[7]:
Inflation Wage Unemployment Consumption Investment InterestRate
count 612.000000 612.000000 612.000000 612.000000 612.000000 612.000000
mean 4.063441 4.258867 6.128922 6.924728 4.958170 5.484281
std 2.614691 1.983816 1.647951 2.899034 7.742085 3.743890
min 0.610698 0.763359 3.400000 -3.367687 -26.800000 0.070000
25% 2.194473 2.731368 5.000000 5.031333 0.700000 3.070000
50% 3.185442 3.773914 5.800000 6.628809 5.800000 5.290000
75% 4.976471 5.944528 7.300000 9.047405 10.300000 7.637500
max 13.585434 9.228442 10.800000 13.574097 24.800000 19.100000
In [9]:
# check correlation between variables 
corr = df.corr()
corr.style.background_gradient(cmap='coolwarm')
Out[9]:
Inflation Wage Unemployment Consumption Investment InterestRate
Inflation 1 0.778155 0.191886 0.61782 -0.341421 0.773616
Wage 0.778155 1 -0.0685292 0.703745 -0.125412 0.647482
Unemployment 0.191886 -0.0685292 1 -0.097183 -0.0382862 -0.0278087
Consumption 0.61782 0.703745 -0.097183 1 0.203165 0.655305
Investment -0.341421 -0.125412 -0.0382862 0.203165 1 -0.234573
InterestRate 0.773616 0.647482 -0.0278087 0.655305 -0.234573 1
  • The correlation plot shows no high correlation between most variables; the exceptions are Inflation, Wage, Consumption, and InterestRate, which correlate moderately to strongly with one another (e.g., Inflation–Wage 0.78).

US macroeconomic variables over time

In [10]:
# Inflation
import plotly.graph_objects as go

fig = go.Figure([go.Scatter(x=df['Month'], y=df['Inflation'],line_color='red')])
fig.update_layout(title="Historical inflation rate ", xaxis_title="Year", yaxis_title="Inflation")
fig.show()
fig.write_image("images/inflation.png")
In [11]:
# wage
fig = go.Figure([go.Scatter(x=df['Month'], y=df['Wage'],line_color='deepskyblue')])
fig.update_layout(title="Historical Wage ", xaxis_title="Year", yaxis_title="Wage")
fig.show()
fig.write_image("images/wage.png")
In [12]:
# Unemployment
fig = go.Figure([go.Scatter(x=df['Month'], y=df['Unemployment'],line_color='green')])
fig.update_layout(title="Historical Unemployment ", xaxis_title="Year", yaxis_title="Unemployment")
fig.show()
fig.write_image("images/Unemployment.png")
In [13]:
# consumption
fig = go.Figure([go.Scatter(x=df['Month'], y=df['Consumption'],line_color='blue')])
fig.update_layout(title="Historical Consumption ", xaxis_title="Year", yaxis_title="Consumption")
fig.show()
fig.write_image("images/Consumption.png")
In [14]:
#investment
fig = go.Figure([go.Scatter(x=df['Month'], y=df['Investment'],line_color='orange')])
fig.update_layout(title="Historical Investment ", xaxis_title="Year", yaxis_title="Investment")
fig.show()
fig.write_image("images/Investment.png")
In [15]:
fig = go.Figure([go.Scatter(x=df['Month'], y=df['InterestRate'],line_color='magenta')])
fig.update_layout(title="Historical InterestRate ", xaxis_title="Year", yaxis_title="InterestRate")
fig.show()
fig.write_image("images/InterestRate.png")
In [16]:
# combining all plots
fig = go.Figure()
fig.add_trace(go.Scatter(x=df['Month'], y=df['Inflation'],line_color='red',name="Inflation",opacity=0.8))
fig.add_trace(go.Scatter(x=df['Month'], y=df['Wage'],line_color='deepskyblue',name="Wage",opacity=0.8))
fig.add_trace(go.Scatter(x=df['Month'], y=df['Unemployment'],line_color='green',name="Unemployment",opacity=0.8))
fig.add_trace(go.Scatter(x=df['Month'], y=df['Consumption'],line_color='blue',name="Consumption",opacity=0.8))
fig.add_trace(go.Scatter(x=df['Month'], y=df['Investment'],line_color='orange',name="Investment",opacity=0.8))
fig.add_trace(go.Scatter(x=df['Month'], y=df['InterestRate'],line_color='magenta',name="InterestRate",opacity=0.8))

# Use date string to set xaxis range
fig.update_layout(xaxis_range=['1965-01-01','2016-01-01'],title_text="USA macroeconomy variables over time",xaxis_title="Year")
fig.show()
fig.write_image("images/economy_variables.png")
  • From the historical plots, we see that the interest rate has been close to zero since 2008–2009. Historically, investment is by far the most volatile variable.

Multivariate time series forecasting

Deep learning: Recurrent Neural Network (LSTM)

Converting the problem to supervised learning

  • First, convert the time series problem to a supervised ML problem.
  • We predict the Inflation rate for the current month using the Inflation rate and the other economic variables from the past 12 months. With more time, we could experiment to choose the best number of past months for predicting the next month's Inflation.
  • We use a sliding-window method to forecast one step at a time.

  • We use the pandas shift function to create the 12 time lags (see the sketch below).
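  • For intuition, a minimal sketch (on a toy frame, not the challenge data) of what shift does:
In [ ]:
# toy example: shift(1) moves each value down one row, creating a one-month lag
import pandas as pd

toy = pd.DataFrame({'Inflation': [1.5, 1.6, 1.2, 1.55]})
toy['Inflation(t-1)'] = toy['Inflation'].shift(1)
print(toy)
#    Inflation  Inflation(t-1)
# 0       1.50             NaN
# 1       1.60            1.50
# 2       1.20            1.60
# 3       1.55            1.20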

In [17]:
df.head()
Out[17]:
Month Inflation Wage Unemployment Consumption Investment InterestRate
0 1965-01-01 1.557632 3.200000 4.9 6.972061 12.3 3.90
1 1965-02-01 1.557632 3.600000 5.1 7.811330 13.2 3.98
2 1965-03-01 1.242236 4.000000 4.7 7.828032 18.7 4.04
3 1965-04-01 1.552795 3.585657 4.8 8.477938 9.8 4.09
4 1965-05-01 1.552795 3.968254 4.6 7.139364 10.2 4.10
In [18]:
df.shape
Out[18]:
(612, 7)
In [20]:
# normalise the data and create the supervised dataframe
# dataset with input columns
df1 = df[['Inflation','Wage','Unemployment', 'Consumption','Investment', 'InterestRate']]
values = df1.values
print(values.shape)
# normalize features
# the default LSTM activation is tanh (range -1 to 1), so inputs should be scaled to a small,
# fixed range; here we use MinMaxScaler with feature_range (0, 1)
# fit_transform expects a 2D array, so reshape the data first if it is not already in that form
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(values)
(612, 6)
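  • We keep the fitted scaler so predictions can later be mapped back to the original units. A quick sanity check (a minimal sketch reusing the variables above):
In [ ]:
# inverse_transform undoes the scaling, which is why the fitted scaler is reused later
restored = scaler.inverse_transform(scaled)
print(scaled.min(), scaled.max())     # 0.0 1.0
print(np.allclose(restored, values))  # True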
In [21]:
# function to convert the data to a supervised learning problem
# uses pandas shift to build the n-lag columns (12 past months)
# ref. from www.machinelearningmastery.com

#data: Sequence of observations as a list or 2D NumPy array
#n_in: Number of lag observations as input (X). Values may be between [1..len(data)] Optional. Defaults to 1.
#n_out: Number of observations as output (y). Values may be between [0..len(data)-1]. Optional. Defaults to 1.
#dropnan: Boolean whether or not to drop rows with NaN values. Optional. Defaults to True.

def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
    n_vars = 1 if type(data) is list else data.shape[1]
    df = DataFrame(data)
    cols, names = list(), list()
    # input sequence (t-n, ... t-1)
    for i in range(n_in, 0, -1):
        cols.append(df.shift(i))
        names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)]
    # forecast sequence (t, t+1, ... t+n)
    for i in range(0, n_out):
        cols.append(df.shift(-i))
        if i == 0:
            names += [('var%d(t)' % (j+1)) for j in range(n_vars)]
        else:
            names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)]
    # put it all together
    agg = concat(cols, axis=1)
    agg.columns = names
    # drop rows with NaN values
    if dropnan:
        agg.dropna(inplace=True)
    return agg
In [22]:
# prepare the data
n_past_months = 12  # number of lag months (12 months)
n_features = 6      # number of features (6 features)

reframed = series_to_supervised(scaled, n_past_months, 1)
reframed.head()
print(reframed.shape)
(600, 78)
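  • Sanity check on the shape: dropping the first 12 rows that contain NaN lags leaves 612 − 12 = 600 samples, and each row holds 6 features × (12 lag steps + 1 current step) = 78 columns.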

Data preparation for Deep learning model training

  • The RNN/LSTM layer expects input data as a 3D array: (samples, time steps, features).
  • Using the reshape function, we convert the data into the 3D array that is passed to the LSTM (see the sketch below).
  • Samples: independent observations from the data, typically rows of data.
  • Time steps: the lag time steps used to predict the future (in our case, 12 months).
  • Features: the number of feature columns used for prediction (in our case, 6).

  • We take 46 years of data for training and the last 5 years (60 months) for testing.

  • We train the LSTM model on the training data and validate it on the test data.
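  • A minimal sketch of the reshape, using this notebook's shapes (540 training samples remain after the 60-month test split):
In [ ]:
# flat supervised rows -> 3D LSTM input
flat = np.zeros((540, 12 * 6))    # (samples, timesteps * features)
seq = flat.reshape((540, 12, 6))  # (samples, timesteps, features)
print(seq.shape)                  # (540, 12, 6)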

LSTM architecture

  • We build an LSTM network with the Keras API to predict the Inflation rate.
  • We define 50 neurons (memory units, or blocks) in the first hidden layer and 1 neuron in the output layer for the prediction.
  • The input shape is 12 time lags with 6 features.
  • We add a dropout layer that randomly drops neurons at a 20% rate, which helps reduce overfitting.
  • We use the mean absolute error (MAE) loss function and the Adam stochastic gradient descent optimizer.
  • We train for 100 epochs with Keras' default batch size (fit is not passed an explicit batch_size).
  • We save each model to an h5 file.
  • We track training and test loss during training by passing validation_data to the fit function, and plot both losses.
  • We forecast the Inflation rate using the previous 12 months of the Inflation rate and the other economic variables.
In [23]:
# split into train and test sets
# specify the number of months to hold out for testing
values = reframed.values

n_test_months = 60
train = values[:-n_test_months, :]
test = values[-n_test_months:, :]
  • Select the input features and the target feature from the modified (supervised) dataframe.
  • target_feature: 6 -> Inflation, 5 -> Wage, 4 -> Unemployment, 3 -> Consumption, 2 -> Investment, 1 -> InterestRate (see the note below).

  • We create 6 models, one to predict each of the 6 variables.

  • This will be useful for predicting any particular feature over multiple future months.
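  • Why this indexing works: the last 6 columns of the supervised frame are var1(t)…var6(t) in the original column order, so index -6 selects Inflation(t), -5 selects Wage(t), and so on down to -1 for InterestRate(t).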
In [24]:
# function for model training
# takes target_feature to select the output variable
# trains and saves the model

def model_training(target_feature, model_name):
    # total number of input columns (lag features only)
    n_col = n_past_months * n_features

    global test_X
    global test_y
    global train_X
    global train_y
    # the last 6 columns hold the time-t values; -target_feature picks the target column
    train_X, train_y = train[:, :n_col], train[:, -target_feature]
    test_X, test_y = test[:, :n_col], test[:, -target_feature]
    # reshape input to be 3D [samples, timesteps, features]
    train_X = train_X.reshape((train_X.shape[0], n_past_months, n_features))
    test_X = test_X.reshape((test_X.shape[0], n_past_months, n_features))

    # design the LSTM network
    model = Sequential()
    model.add(LSTM(50, input_shape=(train_X.shape[1], train_X.shape[2])))  # (n_months, n_features)
    model.add(Dropout(0.2))
    # output layer
    model.add(Dense(1))
    # compile the LSTM
    model.compile(loss='mae', optimizer='adam')

    # fit the network
    history = model.fit(train_X, train_y, epochs=100, validation_data=(test_X, test_y), verbose=2, shuffle=False)

    # save model and architecture to a single h5 file
    model.save("/Users/akshay/Downloads/ADP_challange/ADP_lead_datascientist/models/"+model_name+".h5")

    # plot training history
    plt.plot(history.history['loss'], label='train')
    plt.plot(history.history['val_loss'], label='test')
    plt.legend()
    plt.xlabel('epochs')
    plt.ylabel('mae')
    plt.savefig("/Users/akshay/Downloads/ADP_challange/ADP_lead_datascientist/images/"+model_name+"_error.png")
    plt.show()
In [25]:
# function to make predictions on the test data (60 months)
# we predict the current month using past observations
# we make predictions up to the 60th month of the test data

def model_prediction(target):
    # load the saved model
    model = load_model("/Users/akshay/Downloads/ADP_challange/ADP_lead_datascientist/models/model_"+target+".h5")

    global test_X
    global test_y

    # make a prediction
    test_X = test_X.reshape((test_X.shape[0], n_past_months, n_features))
    yhat = model.predict(test_X)
    test_X = test_X.reshape((test_X.shape[0], n_past_months*n_features))

    # invert scaling for the forecast
    # pad the prediction with 5 other feature columns so the 6-column scaler can be
    # applied; only column 0 (the target) is kept afterwards
    # note: the test_X slicing may need a closer look for targets other than column 0
    inv_yhat = concatenate((yhat, test_X[:, -5:]), axis=1)
    inv_yhat = scaler.inverse_transform(inv_yhat)
    inv_yhat = inv_yhat[:,0]

    # invert scaling for the actual values
    test_y = test_y.reshape((len(test_y), 1))
    inv_y = concatenate((test_y, test_X[:, -5:]), axis=1)
    inv_y = scaler.inverse_transform(inv_y)
    inv_y = inv_y[:,0]

    # calculate RMSE
    rmse = sqrt(mean_squared_error(inv_y, inv_yhat))
    print('Test RMSE: %.3f' % rmse)

    # visualise the results
    plt.plot(inv_y, color = 'red', label = 'actual value')
    plt.plot(inv_yhat, color = 'blue', label = 'predicted value')
    plt.title(target+' economy variable prediction')
    plt.xlabel('Time (months)')
    plt.ylabel(target)
    plt.legend()
    plt.savefig("/Users/akshay/Downloads/ADP_challange/ADP_lead_datascientist/images/"+target+"_prediction.png")
    plt.show()
    
In [103]:
model_training(6, 'model_Inflation')
Train on 540 samples, validate on 60 samples
Epoch 1/100
 - 5s - loss: 0.2043 - val_loss: 0.0831
Epoch 2/100
 - 0s - loss: 0.1222 - val_loss: 0.0171
Epoch 3/100
 - 0s - loss: 0.0834 - val_loss: 0.0614
Epoch 4/100
 - 0s - loss: 0.0808 - val_loss: 0.0316
Epoch 5/100
 - 0s - loss: 0.0746 - val_loss: 0.0158
Epoch 6/100
 - 0s - loss: 0.0621 - val_loss: 0.0401
Epoch 7/100
 - 0s - loss: 0.0724 - val_loss: 0.0479
Epoch 8/100
 - 0s - loss: 0.0638 - val_loss: 0.0407
Epoch 9/100
 - 0s - loss: 0.0600 - val_loss: 0.0136
Epoch 10/100
 - 0s - loss: 0.0574 - val_loss: 0.0309
Epoch 11/100
 - 0s - loss: 0.0596 - val_loss: 0.0580
Epoch 12/100
 - 0s - loss: 0.0562 - val_loss: 0.0588
Epoch 13/100
 - 0s - loss: 0.0525 - val_loss: 0.0216
Epoch 14/100
 - 0s - loss: 0.0526 - val_loss: 0.0226
Epoch 15/100
 - 0s - loss: 0.0479 - val_loss: 0.0269
Epoch 16/100
 - 0s - loss: 0.0560 - val_loss: 0.0541
Epoch 17/100
 - 0s - loss: 0.0528 - val_loss: 0.0445
Epoch 18/100
 - 0s - loss: 0.0551 - val_loss: 0.0191
Epoch 19/100
 - 0s - loss: 0.0503 - val_loss: 0.0131
Epoch 20/100
 - 0s - loss: 0.0503 - val_loss: 0.0481
Epoch 21/100
 - 0s - loss: 0.0555 - val_loss: 0.0281
Epoch 22/100
 - 0s - loss: 0.0547 - val_loss: 0.0105
Epoch 23/100
 - 0s - loss: 0.0418 - val_loss: 0.0142
Epoch 24/100
 - 0s - loss: 0.0536 - val_loss: 0.0207
Epoch 25/100
 - 0s - loss: 0.0433 - val_loss: 0.0280
Epoch 26/100
 - 0s - loss: 0.0464 - val_loss: 0.0100
Epoch 27/100
 - 0s - loss: 0.0377 - val_loss: 0.0233
Epoch 28/100
 - 0s - loss: 0.0504 - val_loss: 0.0266
Epoch 29/100
 - 0s - loss: 0.0384 - val_loss: 0.0185
Epoch 30/100
 - 0s - loss: 0.0402 - val_loss: 0.0129
Epoch 31/100
 - 0s - loss: 0.0394 - val_loss: 0.0254
Epoch 32/100
 - 0s - loss: 0.0502 - val_loss: 0.0268
Epoch 33/100
 - 0s - loss: 0.0434 - val_loss: 0.0108
Epoch 34/100
 - 0s - loss: 0.0440 - val_loss: 0.0112
Epoch 35/100
 - 0s - loss: 0.0443 - val_loss: 0.0170
Epoch 36/100
 - 0s - loss: 0.0462 - val_loss: 0.0228
Epoch 37/100
 - 0s - loss: 0.0408 - val_loss: 0.0191
Epoch 38/100
 - 0s - loss: 0.0366 - val_loss: 0.0111
Epoch 39/100
 - 0s - loss: 0.0472 - val_loss: 0.0179
Epoch 40/100
 - 0s - loss: 0.0412 - val_loss: 0.0214
Epoch 41/100
 - 0s - loss: 0.0384 - val_loss: 0.0153
Epoch 42/100
 - 0s - loss: 0.0346 - val_loss: 0.0163
Epoch 43/100
 - 0s - loss: 0.0440 - val_loss: 0.0223
Epoch 44/100
 - 0s - loss: 0.0346 - val_loss: 0.0259
Epoch 45/100
 - 0s - loss: 0.0427 - val_loss: 0.0093
Epoch 46/100
 - 0s - loss: 0.0348 - val_loss: 0.0163
Epoch 47/100
 - 0s - loss: 0.0447 - val_loss: 0.0189
Epoch 48/100
 - 0s - loss: 0.0343 - val_loss: 0.0221
Epoch 49/100
 - 0s - loss: 0.0361 - val_loss: 0.0080
Epoch 50/100
 - 0s - loss: 0.0335 - val_loss: 0.0235
Epoch 51/100
 - 0s - loss: 0.0419 - val_loss: 0.0215
Epoch 52/100
 - 0s - loss: 0.0346 - val_loss: 0.0215
Epoch 53/100
 - 0s - loss: 0.0355 - val_loss: 0.0080
Epoch 54/100
 - 0s - loss: 0.0339 - val_loss: 0.0219
Epoch 55/100
 - 0s - loss: 0.0338 - val_loss: 0.0262
Epoch 56/100
 - 0s - loss: 0.0295 - val_loss: 0.0195
Epoch 57/100
 - 0s - loss: 0.0313 - val_loss: 0.0084
Epoch 58/100
 - 0s - loss: 0.0301 - val_loss: 0.0096
Epoch 59/100
 - 0s - loss: 0.0306 - val_loss: 0.0280
Epoch 60/100
 - 0s - loss: 0.0348 - val_loss: 0.0307
Epoch 61/100
 - 0s - loss: 0.0293 - val_loss: 0.0151
Epoch 62/100
 - 0s - loss: 0.0349 - val_loss: 0.0085
Epoch 63/100
 - 0s - loss: 0.0274 - val_loss: 0.0178
Epoch 64/100
 - 0s - loss: 0.0347 - val_loss: 0.0271
Epoch 65/100
 - 0s - loss: 0.0309 - val_loss: 0.0170
Epoch 66/100
 - 0s - loss: 0.0380 - val_loss: 0.0087
Epoch 67/100
 - 0s - loss: 0.0311 - val_loss: 0.0107
Epoch 68/100
 - 0s - loss: 0.0374 - val_loss: 0.0169
Epoch 69/100
 - 0s - loss: 0.0299 - val_loss: 0.0216
Epoch 70/100
 - 0s - loss: 0.0307 - val_loss: 0.0076
Epoch 71/100
 - 0s - loss: 0.0278 - val_loss: 0.0106
Epoch 72/100
 - 0s - loss: 0.0324 - val_loss: 0.0228
Epoch 73/100
 - 0s - loss: 0.0280 - val_loss: 0.0209
Epoch 74/100
 - 0s - loss: 0.0288 - val_loss: 0.0080
Epoch 75/100
 - 0s - loss: 0.0273 - val_loss: 0.0077
Epoch 76/100
 - 0s - loss: 0.0318 - val_loss: 0.0257
Epoch 77/100
 - 0s - loss: 0.0309 - val_loss: 0.0145
Epoch 78/100
 - 0s - loss: 0.0283 - val_loss: 0.0079
Epoch 79/100
 - 0s - loss: 0.0302 - val_loss: 0.0077
Epoch 80/100
 - 0s - loss: 0.0336 - val_loss: 0.0243
Epoch 81/100
 - 0s - loss: 0.0390 - val_loss: 0.0178
Epoch 82/100
 - 0s - loss: 0.0287 - val_loss: 0.0140
Epoch 83/100
 - 0s - loss: 0.0288 - val_loss: 0.0088
Epoch 84/100
 - 0s - loss: 0.0280 - val_loss: 0.0200
Epoch 85/100
 - 0s - loss: 0.0321 - val_loss: 0.0162
Epoch 86/100
 - 0s - loss: 0.0276 - val_loss: 0.0149
Epoch 87/100
 - 0s - loss: 0.0273 - val_loss: 0.0107
Epoch 88/100
 - 0s - loss: 0.0268 - val_loss: 0.0175
Epoch 89/100
 - 0s - loss: 0.0277 - val_loss: 0.0231
Epoch 90/100
 - 0s - loss: 0.0279 - val_loss: 0.0153
Epoch 91/100
 - 0s - loss: 0.0258 - val_loss: 0.0078
Epoch 92/100
 - 0s - loss: 0.0331 - val_loss: 0.0086
Epoch 93/100
 - 0s - loss: 0.0344 - val_loss: 0.0185
Epoch 94/100
 - 0s - loss: 0.0351 - val_loss: 0.0154
Epoch 95/100
 - 0s - loss: 0.0309 - val_loss: 0.0088
Epoch 96/100
 - 0s - loss: 0.0311 - val_loss: 0.0086
Epoch 97/100
 - 0s - loss: 0.0333 - val_loss: 0.0169
Epoch 98/100
 - 0s - loss: 0.0327 - val_loss: 0.0160
Epoch 99/100
 - 0s - loss: 0.0270 - val_loss: 0.0077
Epoch 100/100
 - 0s - loss: 0.0284 - val_loss: 0.0077
In [104]:
model_prediction('Inflation')
Test RMSE: 0.128
  • The prediction line follows a trend similar to the actual data. Time permitting, we could improve the model by stacking more LSTM layers, tuning hyperparameters, etc.

Forecasting multiple future months

  • Using the trained LSTM models, we predict the Inflation rate for the coming months of 2016.
  • We use a one-step moving-forward approach, predicting the Inflation rate month over month.
  • First we use the 6 individual models, one per economic variable, each predicting the next month from the previous 12 months of information.
  • We feed the predicted economic variables back in as input and predict the next month's Inflation rate one step at a time.
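  • Note that this is a recursive strategy: each step feeds predictions back in as model inputs, so errors can compound as the horizon grows.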
In [26]:
df.columns
Out[26]:
Index(['Month', 'Inflation', 'Wage', 'Unemployment', 'Consumption',
       'Investment', 'InterestRate'],
      dtype='object')

Model creation

  • Model creation for the remaining economic variables (Wage, Unemployment, Consumption, Investment, InterestRate).
  • The models are created and saved locally.
In [67]:
model_training(5, 'model_Wage')
model_prediction('Wage')
Train on 540 samples, validate on 60 samples
Epoch 1/100
 - 2s - loss: 0.2614 - val_loss: 0.1974
Epoch 2/100
 - 0s - loss: 0.1391 - val_loss: 0.0680
Epoch 3/100
 - 0s - loss: 0.1424 - val_loss: 0.0450
Epoch 4/100
 - 0s - loss: 0.0963 - val_loss: 0.0479
Epoch 5/100
 - 0s - loss: 0.0818 - val_loss: 0.0503
Epoch 6/100
 - 0s - loss: 0.0794 - val_loss: 0.0711
Epoch 7/100
 - 0s - loss: 0.0867 - val_loss: 0.0621
Epoch 8/100
 - 0s - loss: 0.0782 - val_loss: 0.0497
Epoch 9/100
 - 0s - loss: 0.0719 - val_loss: 0.0470
Epoch 10/100
 - 0s - loss: 0.0836 - val_loss: 0.0376
Epoch 11/100
 - 0s - loss: 0.0954 - val_loss: 0.0379
Epoch 12/100
 - 0s - loss: 0.0569 - val_loss: 0.0392
Epoch 13/100
 - 0s - loss: 0.0605 - val_loss: 0.0371
Epoch 14/100
 - 0s - loss: 0.0575 - val_loss: 0.0424
Epoch 15/100
 - 0s - loss: 0.0637 - val_loss: 0.0414
Epoch 16/100
 - 0s - loss: 0.0758 - val_loss: 0.0576
Epoch 17/100
 - 0s - loss: 0.0786 - val_loss: 0.0388
Epoch 18/100
 - 0s - loss: 0.0956 - val_loss: 0.0375
Epoch 19/100
 - 0s - loss: 0.0823 - val_loss: 0.0372
Epoch 20/100
 - 0s - loss: 0.0801 - val_loss: 0.0359
Epoch 21/100
 - 0s - loss: 0.0566 - val_loss: 0.0445
Epoch 22/100
 - 0s - loss: 0.0509 - val_loss: 0.0374
Epoch 23/100
 - 0s - loss: 0.0512 - val_loss: 0.0382
Epoch 24/100
 - 0s - loss: 0.0496 - val_loss: 0.0373
Epoch 25/100
 - 0s - loss: 0.0536 - val_loss: 0.0381
Epoch 26/100
 - 0s - loss: 0.0510 - val_loss: 0.0392
Epoch 27/100
 - 0s - loss: 0.0544 - val_loss: 0.0410
Epoch 28/100
 - 0s - loss: 0.0554 - val_loss: 0.0446
Epoch 29/100
 - 0s - loss: 0.0651 - val_loss: 0.0356
Epoch 30/100
 - 0s - loss: 0.0852 - val_loss: 0.0360
Epoch 31/100
 - 0s - loss: 0.0527 - val_loss: 0.0376
Epoch 32/100
 - 0s - loss: 0.0497 - val_loss: 0.0361
Epoch 33/100
 - 0s - loss: 0.0512 - val_loss: 0.0347
Epoch 34/100
 - 0s - loss: 0.0712 - val_loss: 0.0406
Epoch 35/100
 - 0s - loss: 0.0637 - val_loss: 0.0446
Epoch 36/100
 - 0s - loss: 0.0583 - val_loss: 0.0381
Epoch 37/100
 - 0s - loss: 0.0550 - val_loss: 0.0339
Epoch 38/100
 - 0s - loss: 0.0687 - val_loss: 0.0357
Epoch 39/100
 - 0s - loss: 0.0533 - val_loss: 0.0423
Epoch 40/100
 - 0s - loss: 0.0490 - val_loss: 0.0373
Epoch 41/100
 - 0s - loss: 0.0485 - val_loss: 0.0342
Epoch 42/100
 - 0s - loss: 0.0554 - val_loss: 0.0388
Epoch 43/100
 - 0s - loss: 0.0562 - val_loss: 0.0402
Epoch 44/100
 - 0s - loss: 0.0525 - val_loss: 0.0384
Epoch 45/100
 - 0s - loss: 0.0560 - val_loss: 0.0337
Epoch 46/100
 - 0s - loss: 0.0673 - val_loss: 0.0345
Epoch 47/100
 - 0s - loss: 0.0532 - val_loss: 0.0385
Epoch 48/100
 - 0s - loss: 0.0469 - val_loss: 0.0355
Epoch 49/100
 - 0s - loss: 0.0445 - val_loss: 0.0353
Epoch 50/100
 - 0s - loss: 0.0438 - val_loss: 0.0351
Epoch 51/100
 - 0s - loss: 0.0441 - val_loss: 0.0359
Epoch 52/100
 - 0s - loss: 0.0451 - val_loss: 0.0389
Epoch 53/100
 - 0s - loss: 0.0435 - val_loss: 0.0358
Epoch 54/100
 - 0s - loss: 0.0442 - val_loss: 0.0354
Epoch 55/100
 - 0s - loss: 0.0461 - val_loss: 0.0344
Epoch 56/100
 - 0s - loss: 0.0495 - val_loss: 0.0377
Epoch 57/100
 - 0s - loss: 0.0459 - val_loss: 0.0395
Epoch 58/100
 - 0s - loss: 0.0485 - val_loss: 0.0361
Epoch 59/100
 - 0s - loss: 0.0456 - val_loss: 0.0368
Epoch 60/100
 - 0s - loss: 0.0500 - val_loss: 0.0335
Epoch 61/100
 - 0s - loss: 0.0611 - val_loss: 0.0360
Epoch 62/100
 - 0s - loss: 0.0543 - val_loss: 0.0434
Epoch 63/100
 - 0s - loss: 0.0534 - val_loss: 0.0355
Epoch 64/100
 - 0s - loss: 0.0519 - val_loss: 0.0333
Epoch 65/100
 - 0s - loss: 0.0762 - val_loss: 0.0376
Epoch 66/100
 - 0s - loss: 0.0447 - val_loss: 0.0363
Epoch 67/100
 - 0s - loss: 0.0458 - val_loss: 0.0344
Epoch 68/100
 - 0s - loss: 0.0467 - val_loss: 0.0356
Epoch 69/100
 - 0s - loss: 0.0487 - val_loss: 0.0402
Epoch 70/100
 - 0s - loss: 0.0460 - val_loss: 0.0361
Epoch 71/100
 - 0s - loss: 0.0436 - val_loss: 0.0354
Epoch 72/100
 - 0s - loss: 0.0435 - val_loss: 0.0341
Epoch 73/100
 - 0s - loss: 0.0469 - val_loss: 0.0356
Epoch 74/100
 - 0s - loss: 0.0416 - val_loss: 0.0356
Epoch 75/100
 - 0s - loss: 0.0406 - val_loss: 0.0348
Epoch 76/100
 - 0s - loss: 0.0415 - val_loss: 0.0351
Epoch 77/100
 - 0s - loss: 0.0444 - val_loss: 0.0393
Epoch 78/100
 - 0s - loss: 0.0460 - val_loss: 0.0361
Epoch 79/100
 - 0s - loss: 0.0444 - val_loss: 0.0364
Epoch 80/100
 - 0s - loss: 0.0497 - val_loss: 0.0340
Epoch 81/100
 - 0s - loss: 0.0538 - val_loss: 0.0340
Epoch 82/100
 - 0s - loss: 0.0523 - val_loss: 0.0393
Epoch 83/100
 - 0s - loss: 0.0439 - val_loss: 0.0358
Epoch 84/100
 - 0s - loss: 0.0457 - val_loss: 0.0345
Epoch 85/100
 - 0s - loss: 0.0456 - val_loss: 0.0348
Epoch 86/100
 - 0s - loss: 0.0521 - val_loss: 0.0361
Epoch 87/100
 - 0s - loss: 0.0431 - val_loss: 0.0356
Epoch 88/100
 - 0s - loss: 0.0445 - val_loss: 0.0339
Epoch 89/100
 - 0s - loss: 0.0470 - val_loss: 0.0354
Epoch 90/100
 - 0s - loss: 0.0467 - val_loss: 0.0374
Epoch 91/100
 - 0s - loss: 0.0427 - val_loss: 0.0358
Epoch 92/100
 - 0s - loss: 0.0444 - val_loss: 0.0350
Epoch 93/100
 - 0s - loss: 0.0429 - val_loss: 0.0358
Epoch 94/100
 - 0s - loss: 0.0467 - val_loss: 0.0374
Epoch 95/100
 - 0s - loss: 0.0410 - val_loss: 0.0357
Epoch 96/100
 - 0s - loss: 0.0415 - val_loss: 0.0353
Epoch 97/100
 - 0s - loss: 0.0401 - val_loss: 0.0345
Epoch 98/100
 - 0s - loss: 0.0406 - val_loss: 0.0362
Epoch 99/100
 - 0s - loss: 0.0409 - val_loss: 0.0384
Epoch 100/100
 - 0s - loss: 0.0416 - val_loss: 0.0364
Test RMSE: 0.596
In [68]:
model_training(4, 'model_Unemployment')
model_prediction('Unemployment')
Train on 540 samples, validate on 60 samples
Epoch 1/100
 - 2s - loss: 0.2379 - val_loss: 0.1552
Epoch 2/100
 - 0s - loss: 0.1676 - val_loss: 0.1693
Epoch 3/100
 - 0s - loss: 0.1319 - val_loss: 0.0943
Epoch 4/100
 - 0s - loss: 0.0868 - val_loss: 0.0392
Epoch 5/100
 - 0s - loss: 0.0891 - val_loss: 0.0705
Epoch 6/100
 - 0s - loss: 0.0784 - val_loss: 0.0668
Epoch 7/100
 - 0s - loss: 0.1048 - val_loss: 0.0722
Epoch 8/100
 - 0s - loss: 0.0760 - val_loss: 0.0442
Epoch 9/100
 - 0s - loss: 0.0634 - val_loss: 0.0474
Epoch 10/100
 - 0s - loss: 0.0669 - val_loss: 0.0346
Epoch 11/100
 - 0s - loss: 0.0855 - val_loss: 0.0527
Epoch 12/100
 - 0s - loss: 0.0694 - val_loss: 0.0353
Epoch 13/100
 - 0s - loss: 0.0599 - val_loss: 0.0618
Epoch 14/100
 - 0s - loss: 0.0606 - val_loss: 0.0333
Epoch 15/100
 - 0s - loss: 0.0670 - val_loss: 0.0430
Epoch 16/100
 - 0s - loss: 0.0676 - val_loss: 0.0402
Epoch 17/100
 - 0s - loss: 0.0556 - val_loss: 0.0626
Epoch 18/100
 - 0s - loss: 0.0649 - val_loss: 0.0271
Epoch 19/100
 - 0s - loss: 0.0659 - val_loss: 0.0457
Epoch 20/100
 - 0s - loss: 0.0646 - val_loss: 0.0350
Epoch 21/100
 - 0s - loss: 0.0480 - val_loss: 0.0365
Epoch 22/100
 - 0s - loss: 0.0469 - val_loss: 0.0328
Epoch 23/100
 - 0s - loss: 0.0457 - val_loss: 0.0290
Epoch 24/100
 - 0s - loss: 0.0522 - val_loss: 0.0296
Epoch 25/100
 - 0s - loss: 0.0474 - val_loss: 0.0548
Epoch 26/100
 - 0s - loss: 0.0547 - val_loss: 0.0564
Epoch 27/100
 - 0s - loss: 0.0487 - val_loss: 0.0284
Epoch 28/100
 - 0s - loss: 0.0541 - val_loss: 0.0364
Epoch 29/100
 - 0s - loss: 0.0534 - val_loss: 0.0337
Epoch 30/100
 - 0s - loss: 0.0416 - val_loss: 0.0654
Epoch 31/100
 - 0s - loss: 0.0503 - val_loss: 0.0286
Epoch 32/100
 - 0s - loss: 0.0481 - val_loss: 0.0304
Epoch 33/100
 - 0s - loss: 0.0576 - val_loss: 0.0414
Epoch 34/100
 - 0s - loss: 0.0433 - val_loss: 0.0625
Epoch 35/100
 - 0s - loss: 0.0461 - val_loss: 0.0273
Epoch 36/100
 - 0s - loss: 0.0539 - val_loss: 0.0330
Epoch 37/100
 - 0s - loss: 0.0451 - val_loss: 0.0455
Epoch 38/100
 - 0s - loss: 0.0421 - val_loss: 0.0538
Epoch 39/100
 - 0s - loss: 0.0404 - val_loss: 0.0290
Epoch 40/100
 - 0s - loss: 0.0447 - val_loss: 0.0283
Epoch 41/100
 - 0s - loss: 0.0485 - val_loss: 0.0318
Epoch 42/100
 - 0s - loss: 0.0394 - val_loss: 0.0592
Epoch 43/100
 - 0s - loss: 0.0427 - val_loss: 0.0384
Epoch 44/100
 - 0s - loss: 0.0437 - val_loss: 0.0301
Epoch 45/100
 - 0s - loss: 0.0526 - val_loss: 0.0350
Epoch 46/100
 - 0s - loss: 0.0405 - val_loss: 0.0452
Epoch 47/100
 - 0s - loss: 0.0345 - val_loss: 0.0314
Epoch 48/100
 - 0s - loss: 0.0347 - val_loss: 0.0294
Epoch 49/100
 - 0s - loss: 0.0357 - val_loss: 0.0243
Epoch 50/100
 - 0s - loss: 0.0396 - val_loss: 0.0374
Epoch 51/100
 - 0s - loss: 0.0345 - val_loss: 0.0582
Epoch 52/100
 - 0s - loss: 0.0358 - val_loss: 0.0331
Epoch 53/100
 - 0s - loss: 0.0326 - val_loss: 0.0348
Epoch 54/100
 - 0s - loss: 0.0351 - val_loss: 0.0230
Epoch 55/100
 - 0s - loss: 0.0411 - val_loss: 0.0231
Epoch 56/100
 - 0s - loss: 0.0351 - val_loss: 0.0423
Epoch 57/100
 - 0s - loss: 0.0351 - val_loss: 0.0401
Epoch 58/100
 - 0s - loss: 0.0340 - val_loss: 0.0283
Epoch 59/100
 - 0s - loss: 0.0371 - val_loss: 0.0243
Epoch 60/100
 - 0s - loss: 0.0420 - val_loss: 0.0319
Epoch 61/100
 - 0s - loss: 0.0358 - val_loss: 0.0525
Epoch 62/100
 - 0s - loss: 0.0418 - val_loss: 0.0424
Epoch 63/100
 - 0s - loss: 0.0360 - val_loss: 0.0254
Epoch 64/100
 - 0s - loss: 0.0539 - val_loss: 0.0295
Epoch 65/100
 - 0s - loss: 0.0402 - val_loss: 0.0360
Epoch 66/100
 - 0s - loss: 0.0323 - val_loss: 0.0482
Epoch 67/100
 - 0s - loss: 0.0382 - val_loss: 0.0217
Epoch 68/100
 - 0s - loss: 0.0354 - val_loss: 0.0227
Epoch 69/100
 - 0s - loss: 0.0380 - val_loss: 0.0258
Epoch 70/100
 - 0s - loss: 0.0326 - val_loss: 0.0422
Epoch 71/100
 - 0s - loss: 0.0354 - val_loss: 0.0460
Epoch 72/100
 - 0s - loss: 0.0334 - val_loss: 0.0222
Epoch 73/100
 - 0s - loss: 0.0412 - val_loss: 0.0274
Epoch 74/100
 - 0s - loss: 0.0431 - val_loss: 0.0294
Epoch 75/100
 - 0s - loss: 0.0304 - val_loss: 0.0553
Epoch 76/100
 - 0s - loss: 0.0360 - val_loss: 0.0380
Epoch 77/100
 - 0s - loss: 0.0346 - val_loss: 0.0249
Epoch 78/100
 - 0s - loss: 0.0534 - val_loss: 0.0345
Epoch 79/100
 - 0s - loss: 0.0379 - val_loss: 0.0317
Epoch 80/100
 - 0s - loss: 0.0310 - val_loss: 0.0314
Epoch 81/100
 - 0s - loss: 0.0314 - val_loss: 0.0212
Epoch 82/100
 - 0s - loss: 0.0313 - val_loss: 0.0210
Epoch 83/100
 - 0s - loss: 0.0339 - val_loss: 0.0241
Epoch 84/100
 - 0s - loss: 0.0307 - val_loss: 0.0328
Epoch 85/100
 - 0s - loss: 0.0295 - val_loss: 0.0285
Epoch 86/100
 - 0s - loss: 0.0277 - val_loss: 0.0237
Epoch 87/100
 - 0s - loss: 0.0270 - val_loss: 0.0264
Epoch 88/100
 - 0s - loss: 0.0262 - val_loss: 0.0212
Epoch 89/100
 - 0s - loss: 0.0273 - val_loss: 0.0206
Epoch 90/100
 - 0s - loss: 0.0286 - val_loss: 0.0215
Epoch 91/100
 - 0s - loss: 0.0281 - val_loss: 0.0226
Epoch 92/100
 - 0s - loss: 0.0292 - val_loss: 0.0260
Epoch 93/100
 - 0s - loss: 0.0289 - val_loss: 0.0218
Epoch 94/100
 - 0s - loss: 0.0266 - val_loss: 0.0204
Epoch 95/100
 - 0s - loss: 0.0279 - val_loss: 0.0196
Epoch 96/100
 - 0s - loss: 0.0313 - val_loss: 0.0190
Epoch 97/100
 - 0s - loss: 0.0295 - val_loss: 0.0339
Epoch 98/100
 - 0s - loss: 0.0326 - val_loss: 0.0450
Epoch 99/100
 - 0s - loss: 0.0312 - val_loss: 0.0327
Epoch 100/100
 - 0s - loss: 0.0312 - val_loss: 0.0223
Test RMSE: 0.357
In [69]:
model_training(3, 'model_Consumption')
model_prediction('Consumption')
Train on 540 samples, validate on 60 samples
Epoch 1/100
 - 2s - loss: 0.2404 - val_loss: 0.0597
Epoch 2/100
 - 0s - loss: 0.1002 - val_loss: 0.0331
Epoch 3/100
 - 0s - loss: 0.0874 - val_loss: 0.0318
Epoch 4/100
 - 0s - loss: 0.0863 - val_loss: 0.0448
Epoch 5/100
 - 0s - loss: 0.0808 - val_loss: 0.0334
Epoch 6/100
 - 0s - loss: 0.0778 - val_loss: 0.0386
Epoch 7/100
 - 0s - loss: 0.0777 - val_loss: 0.0440
Epoch 8/100
 - 0s - loss: 0.0731 - val_loss: 0.0275
Epoch 9/100
 - 0s - loss: 0.0769 - val_loss: 0.0239
Epoch 10/100
 - 0s - loss: 0.0724 - val_loss: 0.0339
Epoch 11/100
 - 0s - loss: 0.0752 - val_loss: 0.0303
Epoch 12/100
 - 0s - loss: 0.0810 - val_loss: 0.0326
Epoch 13/100
 - 0s - loss: 0.0938 - val_loss: 0.0570
Epoch 14/100
 - 0s - loss: 0.0748 - val_loss: 0.0395
Epoch 15/100
 - 0s - loss: 0.0754 - val_loss: 0.0240
Epoch 16/100
 - 0s - loss: 0.0889 - val_loss: 0.0305
Epoch 17/100
 - 0s - loss: 0.0707 - val_loss: 0.0255
Epoch 18/100
 - 0s - loss: 0.0682 - val_loss: 0.0452
Epoch 19/100
 - 0s - loss: 0.0742 - val_loss: 0.0347
Epoch 20/100
 - 0s - loss: 0.0633 - val_loss: 0.0315
Epoch 21/100
 - 0s - loss: 0.0665 - val_loss: 0.0235
Epoch 22/100
 - 0s - loss: 0.0668 - val_loss: 0.0260
Epoch 23/100
 - 0s - loss: 0.0636 - val_loss: 0.0230
Epoch 24/100
 - 0s - loss: 0.0646 - val_loss: 0.0430
Epoch 25/100
 - 0s - loss: 0.0754 - val_loss: 0.0604
Epoch 26/100
 - 0s - loss: 0.0652 - val_loss: 0.0461
Epoch 27/100
 - 0s - loss: 0.0633 - val_loss: 0.0236
Epoch 28/100
 - 0s - loss: 0.0670 - val_loss: 0.0262
Epoch 29/100
 - 0s - loss: 0.0693 - val_loss: 0.0261
Epoch 30/100
 - 0s - loss: 0.0613 - val_loss: 0.0230
Epoch 31/100
 - 0s - loss: 0.0652 - val_loss: 0.0363
Epoch 32/100
 - 0s - loss: 0.0655 - val_loss: 0.0384
Epoch 33/100
 - 0s - loss: 0.0618 - val_loss: 0.0277
Epoch 34/100
 - 0s - loss: 0.0620 - val_loss: 0.0228
Epoch 35/100
 - 0s - loss: 0.0642 - val_loss: 0.0233
Epoch 36/100
 - 0s - loss: 0.0677 - val_loss: 0.0248
Epoch 37/100
 - 0s - loss: 0.0614 - val_loss: 0.0315
Epoch 38/100
 - 0s - loss: 0.0683 - val_loss: 0.0284
Epoch 39/100
 - 0s - loss: 0.0579 - val_loss: 0.0224
Epoch 40/100
 - 0s - loss: 0.0638 - val_loss: 0.0237
Epoch 41/100
 - 0s - loss: 0.0602 - val_loss: 0.0226
Epoch 42/100
 - 0s - loss: 0.0591 - val_loss: 0.0227
Epoch 43/100
 - 0s - loss: 0.0593 - val_loss: 0.0402
Epoch 44/100
 - 0s - loss: 0.0636 - val_loss: 0.0279
Epoch 45/100
 - 0s - loss: 0.0577 - val_loss: 0.0221
Epoch 46/100
 - 0s - loss: 0.0643 - val_loss: 0.0220
Epoch 47/100
 - 0s - loss: 0.0645 - val_loss: 0.0228
Epoch 48/100
 - 0s - loss: 0.0556 - val_loss: 0.0216
Epoch 49/100
 - 0s - loss: 0.0568 - val_loss: 0.0223
Epoch 50/100
 - 0s - loss: 0.0526 - val_loss: 0.0230
Epoch 51/100
 - 0s - loss: 0.0537 - val_loss: 0.0223
Epoch 52/100
 - 0s - loss: 0.0563 - val_loss: 0.0211
Epoch 53/100
 - 0s - loss: 0.0531 - val_loss: 0.0209
Epoch 54/100
 - 0s - loss: 0.0527 - val_loss: 0.0301
Epoch 55/100
 - 0s - loss: 0.0558 - val_loss: 0.0330
Epoch 56/100
 - 0s - loss: 0.0577 - val_loss: 0.0219
Epoch 57/100
 - 0s - loss: 0.0533 - val_loss: 0.0213
Epoch 58/100
 - 0s - loss: 0.0572 - val_loss: 0.0226
Epoch 59/100
 - 0s - loss: 0.0638 - val_loss: 0.0237
Epoch 60/100
 - 0s - loss: 0.0591 - val_loss: 0.0238
Epoch 61/100
 - 0s - loss: 0.0614 - val_loss: 0.0322
Epoch 62/100
 - 0s - loss: 0.0594 - val_loss: 0.0256
Epoch 63/100
 - 0s - loss: 0.0526 - val_loss: 0.0209
Epoch 64/100
 - 0s - loss: 0.0549 - val_loss: 0.0211
Epoch 65/100
 - 0s - loss: 0.0530 - val_loss: 0.0214
Epoch 66/100
 - 0s - loss: 0.0524 - val_loss: 0.0218
Epoch 67/100
 - 0s - loss: 0.0533 - val_loss: 0.0239
Epoch 68/100
 - 0s - loss: 0.0508 - val_loss: 0.0206
Epoch 69/100
 - 0s - loss: 0.0527 - val_loss: 0.0201
Epoch 70/100
 - 0s - loss: 0.0497 - val_loss: 0.0200
Epoch 71/100
 - 0s - loss: 0.0492 - val_loss: 0.0196
Epoch 72/100
 - 0s - loss: 0.0513 - val_loss: 0.0201
Epoch 73/100
 - 0s - loss: 0.0516 - val_loss: 0.0210
Epoch 74/100
 - 0s - loss: 0.0498 - val_loss: 0.0262
Epoch 75/100
 - 0s - loss: 0.0528 - val_loss: 0.0274
Epoch 76/100
 - 0s - loss: 0.0522 - val_loss: 0.0254
Epoch 77/100
 - 0s - loss: 0.0498 - val_loss: 0.0250
Epoch 78/100
 - 0s - loss: 0.0507 - val_loss: 0.0215
Epoch 79/100
 - 0s - loss: 0.0549 - val_loss: 0.0240
Epoch 80/100
 - 0s - loss: 0.0632 - val_loss: 0.0236
Epoch 81/100
 - 0s - loss: 0.0583 - val_loss: 0.0202
Epoch 82/100
 - 0s - loss: 0.0613 - val_loss: 0.0262
Epoch 83/100
 - 0s - loss: 0.0583 - val_loss: 0.0200
Epoch 84/100
 - 0s - loss: 0.0502 - val_loss: 0.0206
Epoch 85/100
 - 0s - loss: 0.0509 - val_loss: 0.0211
Epoch 86/100
 - 0s - loss: 0.0518 - val_loss: 0.0205
Epoch 87/100
 - 0s - loss: 0.0508 - val_loss: 0.0194
Epoch 88/100
 - 0s - loss: 0.0513 - val_loss: 0.0227
Epoch 89/100
 - 0s - loss: 0.0521 - val_loss: 0.0257
Epoch 90/100
 - 0s - loss: 0.0471 - val_loss: 0.0228
Epoch 91/100
 - 0s - loss: 0.0514 - val_loss: 0.0195
Epoch 92/100
 - 0s - loss: 0.0532 - val_loss: 0.0194
Epoch 93/100
 - 0s - loss: 0.0490 - val_loss: 0.0216
Epoch 94/100
 - 0s - loss: 0.0514 - val_loss: 0.0237
Epoch 95/100
 - 0s - loss: 0.0504 - val_loss: 0.0323
Epoch 96/100
 - 0s - loss: 0.0513 - val_loss: 0.0200
Epoch 97/100
 - 0s - loss: 0.0505 - val_loss: 0.0191
Epoch 98/100
 - 0s - loss: 0.0561 - val_loss: 0.0195
Epoch 99/100
 - 0s - loss: 0.0523 - val_loss: 0.0194
Epoch 100/100
 - 0s - loss: 0.0503 - val_loss: 0.0205
Test RMSE: 0.339
In [70]:
model_training(2, 'model_Investment')
model_prediction('Investment')
Train on 540 samples, validate on 60 samples
Epoch 1/100
 - 3s - loss: 0.2736 - val_loss: 0.1182
Epoch 2/100
 - 0s - loss: 0.2341 - val_loss: 0.1720
Epoch 3/100
 - 0s - loss: 0.1312 - val_loss: 0.0694
Epoch 4/100
 - 0s - loss: 0.1225 - val_loss: 0.0687
Epoch 5/100
 - 0s - loss: 0.1101 - val_loss: 0.0907
Epoch 6/100
 - 0s - loss: 0.1064 - val_loss: 0.0787
Epoch 7/100
 - 0s - loss: 0.1025 - val_loss: 0.0769
Epoch 8/100
 - 0s - loss: 0.0984 - val_loss: 0.0778
Epoch 9/100
 - 0s - loss: 0.0946 - val_loss: 0.0738
Epoch 10/100
 - 0s - loss: 0.0919 - val_loss: 0.0838
Epoch 11/100
 - 0s - loss: 0.0894 - val_loss: 0.0960
Epoch 12/100
 - 0s - loss: 0.0909 - val_loss: 0.1007
Epoch 13/100
 - 0s - loss: 0.0873 - val_loss: 0.0899
Epoch 14/100
 - 0s - loss: 0.0836 - val_loss: 0.0697
Epoch 15/100
 - 0s - loss: 0.0870 - val_loss: 0.0585
Epoch 16/100
 - 0s - loss: 0.0900 - val_loss: 0.0673
Epoch 17/100
 - 0s - loss: 0.0852 - val_loss: 0.0806
Epoch 18/100
 - 0s - loss: 0.0838 - val_loss: 0.0818
Epoch 19/100
 - 0s - loss: 0.0797 - val_loss: 0.0637
Epoch 20/100
 - 0s - loss: 0.0888 - val_loss: 0.0572
Epoch 21/100
 - 0s - loss: 0.0865 - val_loss: 0.0756
Epoch 22/100
 - 0s - loss: 0.0877 - val_loss: 0.0671
Epoch 23/100
 - 0s - loss: 0.0764 - val_loss: 0.0688
Epoch 24/100
 - 0s - loss: 0.0801 - val_loss: 0.0578
Epoch 25/100
 - 0s - loss: 0.0792 - val_loss: 0.0667
Epoch 26/100
 - 0s - loss: 0.0809 - val_loss: 0.0791
Epoch 27/100
 - 0s - loss: 0.0720 - val_loss: 0.0585
Epoch 28/100
 - 0s - loss: 0.0803 - val_loss: 0.0586
Epoch 29/100
 - 0s - loss: 0.0734 - val_loss: 0.0606
Epoch 30/100
 - 0s - loss: 0.0744 - val_loss: 0.0635
Epoch 31/100
 - 0s - loss: 0.0747 - val_loss: 0.0596
Epoch 32/100
 - 0s - loss: 0.0723 - val_loss: 0.0544
Epoch 33/100
 - 0s - loss: 0.0741 - val_loss: 0.0582
Epoch 34/100
 - 0s - loss: 0.0753 - val_loss: 0.0606
Epoch 35/100
 - 0s - loss: 0.0715 - val_loss: 0.0634
Epoch 36/100
 - 0s - loss: 0.0748 - val_loss: 0.0650
Epoch 37/100
 - 0s - loss: 0.0724 - val_loss: 0.0587
Epoch 38/100
 - 0s - loss: 0.0696 - val_loss: 0.0530
Epoch 39/100
 - 0s - loss: 0.0720 - val_loss: 0.0549
Epoch 40/100
 - 0s - loss: 0.0732 - val_loss: 0.0723
Epoch 41/100
 - 0s - loss: 0.0702 - val_loss: 0.0603
Epoch 42/100
 - 0s - loss: 0.0687 - val_loss: 0.0551
Epoch 43/100
 - 0s - loss: 0.0663 - val_loss: 0.0503
Epoch 44/100
 - 0s - loss: 0.0734 - val_loss: 0.0518
Epoch 45/100
 - 0s - loss: 0.0708 - val_loss: 0.0605
Epoch 46/100
 - 0s - loss: 0.0704 - val_loss: 0.0699
Epoch 47/100
 - 0s - loss: 0.0678 - val_loss: 0.0581
Epoch 48/100
 - 0s - loss: 0.0675 - val_loss: 0.0520
Epoch 49/100
 - 0s - loss: 0.0689 - val_loss: 0.0505
Epoch 50/100
 - 0s - loss: 0.0705 - val_loss: 0.0515
Epoch 51/100
 - 0s - loss: 0.0677 - val_loss: 0.0523
Epoch 52/100
 - 0s - loss: 0.0678 - val_loss: 0.0585
Epoch 53/100
 - 0s - loss: 0.0665 - val_loss: 0.0546
Epoch 54/100
 - 0s - loss: 0.0644 - val_loss: 0.0544
Epoch 55/100
 - 0s - loss: 0.0686 - val_loss: 0.0608
Epoch 56/100
 - 0s - loss: 0.0683 - val_loss: 0.0535
Epoch 57/100
 - 0s - loss: 0.0671 - val_loss: 0.0585
Epoch 58/100
 - 0s - loss: 0.0689 - val_loss: 0.0539
Epoch 59/100
 - 0s - loss: 0.0648 - val_loss: 0.0504
Epoch 60/100
 - 0s - loss: 0.0652 - val_loss: 0.0488
Epoch 61/100
 - 0s - loss: 0.0679 - val_loss: 0.0513
Epoch 62/100
 - 0s - loss: 0.0687 - val_loss: 0.0580
Epoch 63/100
 - 0s - loss: 0.0700 - val_loss: 0.0569
Epoch 64/100
 - 0s - loss: 0.0681 - val_loss: 0.0506
Epoch 65/100
 - 0s - loss: 0.0692 - val_loss: 0.0485
Epoch 66/100
 - 0s - loss: 0.0664 - val_loss: 0.0507
Epoch 67/100
 - 0s - loss: 0.0663 - val_loss: 0.0550
Epoch 68/100
 - 0s - loss: 0.0694 - val_loss: 0.0506
Epoch 69/100
 - 0s - loss: 0.0640 - val_loss: 0.0485
Epoch 70/100
 - 0s - loss: 0.0670 - val_loss: 0.0482
Epoch 71/100
 - 0s - loss: 0.0686 - val_loss: 0.0593
Epoch 72/100
 - 0s - loss: 0.0708 - val_loss: 0.0549
Epoch 73/100
 - 0s - loss: 0.0658 - val_loss: 0.0524
Epoch 74/100
 - 0s - loss: 0.0623 - val_loss: 0.0492
Epoch 75/100
 - 0s - loss: 0.0667 - val_loss: 0.0499
Epoch 76/100
 - 0s - loss: 0.0688 - val_loss: 0.0503
Epoch 77/100
 - 0s - loss: 0.0637 - val_loss: 0.0567
Epoch 78/100
 - 0s - loss: 0.0653 - val_loss: 0.0621
Epoch 79/100
 - 0s - loss: 0.0649 - val_loss: 0.0530
Epoch 80/100
 - 0s - loss: 0.0622 - val_loss: 0.0537
Epoch 81/100
 - 0s - loss: 0.0679 - val_loss: 0.0501
Epoch 82/100
 - 0s - loss: 0.0665 - val_loss: 0.0563
Epoch 83/100
 - 0s - loss: 0.0614 - val_loss: 0.0545
Epoch 84/100
 - 0s - loss: 0.0631 - val_loss: 0.0530
Epoch 85/100
 - 0s - loss: 0.0606 - val_loss: 0.0489
Epoch 86/100
 - 0s - loss: 0.0617 - val_loss: 0.0483
Epoch 87/100
 - 0s - loss: 0.0624 - val_loss: 0.0521
Epoch 88/100
 - 0s - loss: 0.0650 - val_loss: 0.0543
Epoch 89/100
 - 0s - loss: 0.0609 - val_loss: 0.0578
Epoch 90/100
 - 0s - loss: 0.0619 - val_loss: 0.0564
Epoch 91/100
 - 0s - loss: 0.0625 - val_loss: 0.0481
Epoch 92/100
 - 0s - loss: 0.0615 - val_loss: 0.0494
Epoch 93/100
 - 0s - loss: 0.0615 - val_loss: 0.0502
Epoch 94/100
 - 0s - loss: 0.0637 - val_loss: 0.0543
Epoch 95/100
 - 0s - loss: 0.0624 - val_loss: 0.0576
Epoch 96/100
 - 0s - loss: 0.0645 - val_loss: 0.0473
Epoch 97/100
 - 0s - loss: 0.0618 - val_loss: 0.0491
Epoch 98/100
 - 0s - loss: 0.0604 - val_loss: 0.0474
Epoch 99/100
 - 0s - loss: 0.0628 - val_loss: 0.0523
Epoch 100/100
 - 0s - loss: 0.0629 - val_loss: 0.0494
Test RMSE: 0.851
In [71]:
model_training(1, 'model_InterestRate')
model_prediction('InterestRate')
Train on 540 samples, validate on 60 samples
Epoch 1/100
 - 3s - loss: 0.1445 - val_loss: 0.1250
Epoch 2/100
 - 0s - loss: 0.1436 - val_loss: 0.0621
Epoch 3/100
 - 0s - loss: 0.1052 - val_loss: 0.1169
Epoch 4/100
 - 0s - loss: 0.0998 - val_loss: 0.0852
Epoch 5/100
 - 0s - loss: 0.0838 - val_loss: 0.0830
Epoch 6/100
 - 0s - loss: 0.0793 - val_loss: 0.0787
Epoch 7/100
 - 0s - loss: 0.0758 - val_loss: 0.0594
Epoch 8/100
 - 0s - loss: 0.0644 - val_loss: 0.0425
Epoch 9/100
 - 0s - loss: 0.0672 - val_loss: 0.0150
Epoch 10/100
 - 0s - loss: 0.0645 - val_loss: 0.0248
Epoch 11/100
 - 0s - loss: 0.0681 - val_loss: 0.0648
Epoch 12/100
 - 0s - loss: 0.0777 - val_loss: 0.0468
Epoch 13/100
 - 0s - loss: 0.0686 - val_loss: 0.0362
Epoch 14/100
 - 0s - loss: 0.0738 - val_loss: 0.0521
Epoch 15/100
 - 0s - loss: 0.0628 - val_loss: 0.0370
Epoch 16/100
 - 0s - loss: 0.0550 - val_loss: 0.0411
Epoch 17/100
 - 0s - loss: 0.0514 - val_loss: 0.0139
Epoch 18/100
 - 0s - loss: 0.0526 - val_loss: 0.0140
Epoch 19/100
 - 0s - loss: 0.0488 - val_loss: 0.0207
Epoch 20/100
 - 0s - loss: 0.0566 - val_loss: 0.0356
Epoch 21/100
 - 0s - loss: 0.0475 - val_loss: 0.0166
Epoch 22/100
 - 0s - loss: 0.0524 - val_loss: 0.0159
Epoch 23/100
 - 0s - loss: 0.0523 - val_loss: 0.0121
Epoch 24/100
 - 0s - loss: 0.0551 - val_loss: 0.0388
Epoch 25/100
 - 0s - loss: 0.0637 - val_loss: 0.0317
Epoch 26/100
 - 0s - loss: 0.0508 - val_loss: 0.0260
Epoch 27/100
 - 0s - loss: 0.0566 - val_loss: 0.0369
Epoch 28/100
 - 0s - loss: 0.0522 - val_loss: 0.0247
Epoch 29/100
 - 0s - loss: 0.0466 - val_loss: 0.0230
Epoch 30/100
 - 0s - loss: 0.0409 - val_loss: 0.0152
Epoch 31/100
 - 0s - loss: 0.0412 - val_loss: 0.0082
Epoch 32/100
 - 0s - loss: 0.0376 - val_loss: 0.0082
Epoch 33/100
 - 0s - loss: 0.0421 - val_loss: 0.0227
Epoch 34/100
 - 0s - loss: 0.0417 - val_loss: 0.0225
Epoch 35/100
 - 0s - loss: 0.0394 - val_loss: 0.0163
Epoch 36/100
 - 0s - loss: 0.0364 - val_loss: 0.0083
Epoch 37/100
 - 0s - loss: 0.0400 - val_loss: 0.0130
Epoch 38/100
 - 0s - loss: 0.0437 - val_loss: 0.0101
Epoch 39/100
 - 0s - loss: 0.0488 - val_loss: 0.0325
Epoch 40/100
 - 0s - loss: 0.0548 - val_loss: 0.0303
Epoch 41/100
 - 0s - loss: 0.0436 - val_loss: 0.0275
Epoch 42/100
 - 0s - loss: 0.0444 - val_loss: 0.0138
Epoch 43/100
 - 0s - loss: 0.0456 - val_loss: 0.0158
Epoch 44/100
 - 0s - loss: 0.0481 - val_loss: 0.0251
Epoch 45/100
 - 0s - loss: 0.0387 - val_loss: 0.0196
Epoch 46/100
 - 0s - loss: 0.0418 - val_loss: 0.0078
Epoch 47/100
 - 0s - loss: 0.0365 - val_loss: 0.0085
Epoch 48/100
 - 0s - loss: 0.0389 - val_loss: 0.0163
Epoch 49/100
 - 0s - loss: 0.0423 - val_loss: 0.0141
Epoch 50/100
 - 0s - loss: 0.0361 - val_loss: 0.0098
Epoch 51/100
 - 0s - loss: 0.0368 - val_loss: 0.0077
Epoch 52/100
 - 0s - loss: 0.0373 - val_loss: 0.0086
Epoch 53/100
 - 0s - loss: 0.0399 - val_loss: 0.0175
Epoch 54/100
 - 0s - loss: 0.0478 - val_loss: 0.0264
Epoch 55/100
 - 0s - loss: 0.0450 - val_loss: 0.0196
Epoch 56/100
 - 0s - loss: 0.0467 - val_loss: 0.0238
Epoch 57/100
 - 0s - loss: 0.0426 - val_loss: 0.0194
Epoch 58/100
 - 0s - loss: 0.0406 - val_loss: 0.0183
Epoch 59/100
 - 0s - loss: 0.0381 - val_loss: 0.0148
Epoch 60/100
 - 0s - loss: 0.0340 - val_loss: 0.0109
Epoch 61/100
 - 0s - loss: 0.0370 - val_loss: 0.0076
Epoch 62/100
 - 0s - loss: 0.0353 - val_loss: 0.0099
Epoch 63/100
 - 0s - loss: 0.0370 - val_loss: 0.0169
Epoch 64/100
 - 0s - loss: 0.0332 - val_loss: 0.0138
Epoch 65/100
 - 0s - loss: 0.0347 - val_loss: 0.0076
Epoch 66/100
 - 0s - loss: 0.0336 - val_loss: 0.0082
Epoch 67/100
 - 0s - loss: 0.0350 - val_loss: 0.0157
Epoch 68/100
 - 0s - loss: 0.0363 - val_loss: 0.0191
Epoch 69/100
 - 0s - loss: 0.0369 - val_loss: 0.0139
Epoch 70/100
 - 0s - loss: 0.0382 - val_loss: 0.0070
Epoch 71/100
 - 0s - loss: 0.0367 - val_loss: 0.0067
Epoch 72/100
 - 0s - loss: 0.0360 - val_loss: 0.0149
Epoch 73/100
 - 0s - loss: 0.0350 - val_loss: 0.0173
Epoch 74/100
 - 0s - loss: 0.0317 - val_loss: 0.0099
Epoch 75/100
 - 0s - loss: 0.0303 - val_loss: 0.0089
Epoch 76/100
 - 0s - loss: 0.0322 - val_loss: 0.0071
Epoch 77/100
 - 1s - loss: 0.0347 - val_loss: 0.0137
Epoch 78/100
 - 0s - loss: 0.0336 - val_loss: 0.0177
Epoch 79/100
 - 0s - loss: 0.0294 - val_loss: 0.0078
Epoch 80/100
 - 0s - loss: 0.0321 - val_loss: 0.0066
Epoch 81/100
 - 0s - loss: 0.0336 - val_loss: 0.0069
Epoch 82/100
 - 0s - loss: 0.0342 - val_loss: 0.0069
Epoch 83/100
 - 0s - loss: 0.0361 - val_loss: 0.0116
Epoch 84/100
 - 0s - loss: 0.0340 - val_loss: 0.0183
Epoch 85/100
 - 0s - loss: 0.0332 - val_loss: 0.0072
Epoch 86/100
 - 0s - loss: 0.0380 - val_loss: 0.0069
Epoch 87/100
 - 0s - loss: 0.0344 - val_loss: 0.0096
Epoch 88/100
 - 0s - loss: 0.0397 - val_loss: 0.0135
Epoch 89/100
 - 0s - loss: 0.0298 - val_loss: 0.0195
Epoch 90/100
 - 0s - loss: 0.0335 - val_loss: 0.0087
Epoch 91/100
 - 0s - loss: 0.0304 - val_loss: 0.0043
Epoch 92/100
 - 0s - loss: 0.0341 - val_loss: 0.0124
Epoch 93/100
 - 0s - loss: 0.0297 - val_loss: 0.0155
Epoch 94/100
 - 0s - loss: 0.0306 - val_loss: 0.0069
Epoch 95/100
 - 0s - loss: 0.0305 - val_loss: 0.0049
Epoch 96/100
 - 0s - loss: 0.0327 - val_loss: 0.0123
Epoch 97/100
 - 0s - loss: 0.0349 - val_loss: 0.0182
Epoch 98/100
 - 0s - loss: 0.0293 - val_loss: 0.0093
Epoch 99/100
 - 0s - loss: 0.0343 - val_loss: 0.0074
Epoch 100/100
 - 0s - loss: 0.0294 - val_loss: 0.0097
Test RMSE: 0.135
In [72]:
# load saved models

model_Inflation=load_model("/Users/akshay/Downloads/ADP_challange/ADP_lead_datascientist/models/model_Inflation.h5")
model_Wage=load_model("/Users/akshay/Downloads/ADP_challange/ADP_lead_datascientist/models/model_Wage.h5")
model_Unemployment=load_model("/Users/akshay/Downloads/ADP_challange/ADP_lead_datascientist/models/model_Unemployment.h5")
model_Consumption=load_model("/Users/akshay/Downloads/ADP_challange/ADP_lead_datascientist/models/model_Consumption.h5")
model_Investment=load_model("/Users/akshay/Downloads/ADP_challange/ADP_lead_datascientist/models/model_Investment.h5")
model_InterestRate=load_model("/Users/akshay/Downloads/ADP_challange/ADP_lead_datascientist/models/model_InterestRate.h5")
In [84]:
# initialise with the last window of the test set and predict all variables for the next time steps
# we predict the Inflation rate for the next 6 months

# list of models
model_list=[model_Inflation,model_Wage,model_Unemployment,model_Consumption,model_Investment,model_InterestRate]

# extract the last 12 months of data (covering 2015) so we can predict Jan 2016, and
# then, moving the window forward, Feb, March, April, May, June 2016

tmp = test_X[59].reshape(1, n_past_months, n_features)

for i in range(6):  # one iteration per future month
    ypred = []
    for model in model_list:
        ypred.append(model.predict(tmp))

    # combine the individual arrays into one array
    ypred = np.concatenate(ypred)

    # reshape ypred to combine with the input for the next iteration
    ypred = ypred.reshape(1, 6)

    # flatten the current window
    tmp = tmp.reshape(1, n_past_months*n_features)

    # append the predictions and keep only the most recent 12 months (72 values)
    tmp = concatenate((tmp, ypred), axis=1)
    tmp = tmp[0][-n_past_months*n_features:]

    # reshape tmp to pass into the models
    tmp = tmp.reshape(1, n_past_months, n_features)
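  • After the loop, tmp holds the most recent 12-month window in scaled form: the last 6 actual months followed by the 6 recursive predictions. Its first column is the Inflation series that is un-scaled and plotted below.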
In [105]:
# plot the prediction for future months

# invert scaling for the forecast
# the window already has the 6-column layout the scaler expects

inv_tmp = scaler.inverse_transform(tmp.reshape(12, 6))

# append the window (last 6 actual months + 6 predicted months) to the actual series
tt = concatenate((df.Inflation[-60:].values, inv_tmp[:, 0]), axis=0)

plt.plot(df.Inflation[-60:].values, color = 'b', label = 'actual value')
plt.plot(tt, color = 'b', label = 'predicted value', linestyle='dashed', markerfacecolor='blue')
plt.title('Inflation economy variable prediction')
plt.xlabel('Time (months)')
plt.ylabel('Inflation')
plt.legend()
plt.savefig("/Users/akshay/Downloads/ADP_challange/ADP_lead_datascientist/images/future_month_prediction.png")
plt.show()
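
Summary

  • Model: one single-hidden-layer LSTM per variable (50 memory units, 20% dropout, one dense output neuron), built with Keras.
  • Training and testing: all 6 variables are min-max scaled and framed as a supervised problem with 12 monthly lags; each model is trained for 100 epochs with MAE loss and the Adam optimizer on the first 540 samples and validated on the last 60 months.
  • Evaluation criterion: RMSE on the 60 held-out months, computed after inverting the scaling.
  • Results (test RMSE): Inflation 0.128, InterestRate 0.135, Consumption 0.339, Unemployment 0.357, Wage 0.596, Investment 0.851. The volatile Investment series is the hardest to predict, while Inflation and InterestRate are predicted most accurately.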